And I don’t mind if the people too biased to read past style issues ignore me—I consider it a benefit that they will self-select themselves for me like that.
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
And the assumptions you’re probably making about my goals here (required to judge if I got what I wanted, or not) are not correct either.
I don’t think I’m making any assumptions about this. What are your goals? I can think of several off the top of my head:
1. Convince others of your ideas.
2. Get feedback on your ideas to help you further refine them.
3. Perform a social experiment on LW (use combative tone, see results)
Goal #3 and related “trollish” goals are the only ones I can think of off the top of my head that benefit from a combative tone.
I mean...why write at all if you don’t want most people to look at the actual content of your writing?
These aren’t attacks in question form. These are honest questions.
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
I personally enjoyed confirming that this thread sent curi below the threshold at which anyone who is not a prominent SIAI donor can make posts. Does that count?
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
Haven’t you noticed some people said positive things in comments?
You say you aren’t making “any assumptions” but then you give some. As an example, I’m not here to do (1) -- if it happens, cool, but it’s not my goal. That was, apparently, your assumption.
I mean...why write at all if you don’t want most people to look at the actual content of your writing?
The halfway sane people can read the content. I haven’t actually said anything I shouldn’t have. I haven’t actually said anything combative, except in a few particular instances where it was intentional (none in this discussion topic). I’ve only said things that I’m vaguely aware offend some silly cultural biases. The reason I said “biased idiots” is not to fight with anyone but because it clearly and accurately expresses the idea I had in mind (and also because the concept of bias is a topic of interest here). There is a serious disrespect for people as rational beings prevalent in the culture here.
Do people really get offended because of language? Maybe sometimes. I think most of it is not because of that. It’s the substance. I think the attitude behind the conjunction fallacy, and the studies, and various other things popular here, is anti-human. It’s treating humans like dirt, like idiots, like animals. That’s false and grossly immoral. I wonder how many people are now more offended due to the clarification, and how many less.
These aren’t attacks in question form. These are honest questions.
That’s what I thought. And if I didn’t, saying it would change nothing because asserting it is not an argument.
Unless, perhaps, it hadn’t even occurred to me at all. I think there’s a clash of world views and I’m not interested in changing mine for the purpose of alleviating disagreement. e.g. my writing in general is focussed on better people than the ones to whom it wouldn’t even occur that the questions weren’t attacks. They are the target audience I care about, usually. You’re so used to dealing with bad people that they occupy attention in your mind. I usually don’t want them to get attention in my mind. There’s better people in the world, and better things to think about.
There is a serious disrespect for people as rational beings prevalent in the culture here.
People aren’t completely rational beings. Pretending they are is showing more disrespect than acknowledging that we have flaws.
It’s treating humans like dirt, like idiots,
Not like dirt. Not like idiots. As though we sometimes act as idiots, yes. Because we sometimes do. You seem to be confusing “always” and “sometimes”.
like animals.
We are, in fact, animals. We’re a type of ape that has a very big brain, as primates go. We have many differences from the rest of the animals, but the similarities with other animals should be clear enough that it would be a severe mistake to not call us animals.
That’s false and grossly immoral.
Ah, now this is a very honest and revealing statement. These are two separate issues, of course. Statements can be true or false, and actions can be grossly immoral, and not the reverse. Yet they’re linked in your mind. Why is that? Did you decide these statements(*) were false and thus that holding them is grossly immoral (or, to be charitable, likely to lead to grossly immoral actions), or were you offended by the statements and thus decided they must be false?
I wonder how many people are now more offended due to the clarification, and how many less.
Not offended. Just saddened.
(*) Rather, your misunderstanding of them. “People can think P(X&Y) > P(X)” is not the same as “People always think P(X&Y) > P(X) for all X and Y”. Yes, of course special X and Y have to be selected to demonstrate this. There is a wide range between “humans aren’t perfect” and “humans are idiots”. The point of studies like this is not to assert humans are idiots, or bad at reasoning, or worthless. The point is to find out where and when normal human reasoning breaks down. Not doing these studies doesn’t change the fact that humans aren’t perfectly rational, it merely hides the exact contours of when our reasoning breaks down.
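To spell out the rule at issue (nothing beyond the product rule and the fact that probabilities never exceed 1 is assumed):

$$P(X \wedge Y) = P(X)\,P(Y \mid X) \le P(X)$$

So judging “X and Y” as more probable than X alone is an error for any X and Y whatsoever; the carefully chosen X and Y in the studies are only needed to get people to commit the error, not to make it an error.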
The possible (asserted, not shown) existence of an agenda might indeed have something to do with what studies were done and how the interpreters presented them. The morality or immorality of this agenda is what is a separate issue.
I think the attitude behind the conjunction fallacy, and the studies, and various other things popular here, is anti-human. It’s treating humans like dirt, like idiots, like animals. That’s false and grossly immoral.
This seems terribly backwards to me. In order to become more rational, to think better, we must examine our built-in biases. We don’t do and use such studies simply to point out the results and laugh at “idiots”; we must first understand the problems we face, the places where our brains don’t work well, so that we can fix them and improve ourselves. Utilizing the lens that sees its flaws is inherently pro-human. (Note that “pro-human” should be taken to mean something reasonable like “wanting humans to be the best that they can be,” which does involve admitting that humans aren’t at that point right now. That is only “anti-human” in the sense that Americans who want to improve their country are “anti-America.”)
This seems terribly backwards to me. In order to become more rational, to think better, we must examine our built-in biases.
I wasn’t talking about examining biases that exist. I was talking about making up biases, and finding ways to claim the authority of science for them.
These studies, and others, are wrong (as I explained in my post). And not randomly. They don’t reach random conclusions or make random errors. All the conclusions fit a particular agenda which has a low opinion of humans. (Low by what standard? The average man-on-the-street American Christian, for example. These studies have a significantly lower opinion of humans than our culture in general does.)
We don’t do and use such studies simply to point out the results and laugh at “idiots”;
Can you understand that I didn’t say anything about laughing? And that replies like this are not productive, rational discussion?
People who demean all humans are demeaning themselves too. Who are they to laugh at?
Utilizing the lens that sees its flaws is inherently pro-human.
Do you understand that I didn’t say that the “lens that sees its flaws” was anti-human?
we must first understand the problems we face, the places where our brains don’t work well,
Can you understand that the notion that our brains don’t work well, in some places, is itself a substantive assertion (for which you provide no argument, and for which these bad studies attempt to provide authority)?
Can you see that this claim is in the direction of thinking humans are a less awesome thing than they would otherwise be?
I’ll stop here for now. If you can handle these basic introductory issues, I’ll move on to some further explanation of what I was talking about. If you, for example, try to interpret these statements as my substantive arguments on the topic, and complain that they aren’t very good arguments for my initial claim, then I will stop speaking with you.
These studies, and others, are wrong (as I explained in my post). And not randomly. They don’t reach random conclusions or make random errors. All the conclusions fit a particular agenda which has a low opinion of humans. (Low by what standard? The average man-on-the-street American Christian, for example. These studies have a significantly lower opinion of humans than our culture in general does.)
Your post only served to highlight your own misunderstanding of the topic. From this, you’ve proceeded to imply an “agenda” behind various research and conclusions that you deem faulty. To be blunt: you sound like a conspiracy theorist at this point.
Can you understand that the notion that our brains don’t work well, in some places, is itself a substantive assertion (for which you provide no argument, and for which these bad studies attempt to provide authority)?
Also, at this point it sounds like you’re throwing out all studies of cognitive bias based on your problems with one specific area.
Can you see that this claim is in the direction of thinking humans are a less awesome thing than they would otherwise be?
Having a low opinion of default human mental capabilities does not necessarily extend to having a low opinion of humans in general. Humans are not static. Having an undeservedly high opinion of default human mental capabilities is a barrier to self-improvement.
You constantly attack trivial side-points, and peculiarities of wording, putting you somewhere in the DH3-5 region, not bad but below what we are aiming for on this site, which is DH6 or preferably Black Belt Bayesian’s new DH7 (deliberately improving your opponents’ arguments for them, refuting not only the argument but the strongest thing which can be constructed from its corpse).
Can you understand that I didn’t say anything about laughing? And that replies like this are not productive, rational discussion.
This is a good example. Yes, he used the word laugh and you didn’t, but that is not important to the point he was making, nor does it significantly change the meaning or the strength of the argument to remove it. If you wish to reach the higher levels of the DH, you should try to ignore things like that and refute the main point.
Do you understand that I didn’t say that the “lens that sees its flaws” was the anti-human?
This is another example. You never used the phrase ‘lens that sees its flaws’, but since the heuristics and biases program is absolutely central to the lens that sees its flaws, and you accused that of being anti-human, his central point still stands.
Side-points are not trivial, and they are not unimportant, nor irrelevant (some can sometimes be irrelevant. These aren’t, in my view.)
Every mistake matters. Learning is hard. Understanding people very different than you is hard. Agreeing about which points are trivial, with people who see the world differently, is hard too.
To deal with the huge difficulty of things like learning, changing minds, making progress, it’s important to do things like respect every mistake and try to fix them all, not dismiss some as too small and gloss them over. Starting small is the best approach; only gradual progress works; trying to address the big issues straight away does not work.
Black Belt Bayesian’s new DH7
New? Do you really think that’s a new idea? Popper talked about it long ago.
Yes, he used the word laugh and you didn’t, but that is not important
But I regarded it as important in several ways.
nor does it significantly change the meaning or the strength of the argument to remove it.
But 1) I think it does 2) I think that by rewriting it without the mistake he might learn something; he might find out it does matter; we might have a little more common ground. I don’t think skipping steps like this is wise.
To deal with the huge difficulty of things like learning, changing minds, making progress, it’s important to do things like respect every mistake and try to fix them all, not dismiss some as too small and gloss them over. Starting small is the best approach; only gradual progress works; trying to address the big issues straight away does not work.
You do not take into account time constraints. I have not been keeping count, but discussions with you alone have probably eaten up about 2 hours of my time, quite a lot considering the number of other tasks I am under pressure to perform.
Your approach may work, but in terms of progress made per second, it is inefficient. Consider how long it takes for every mistake:
1) First you have to notice the mistake
2) Then you have to take the time to write about it
3) Then the other person has to take the time to read it
4) Then they have to take the time to explain it to you / own up to it
5) Then they have to pay extra attention while they type, for fear they will make a similar mistake and you will waste their time again
6) Then, everybody else reading the comments will have to read through that discussion, and we really don’t care about such a tiny mistake, really, not at all!
Is it worth it for the tiny gains?
I think a better approach is to go straight for the central claim. If you and they continue to disagree, consider the possibility that you are biased, and question why their point might seem wrong to you, what could be causing you not to see their point, do you have a line of retreat? (You trust that he will be doing the same; don’t make accusations until you are sure of yourself.) If that doesn’t work, you start tabooing words; an amazingly high proportion of the time the disagreement just vanishes. If that doesn’t work, you either continue to look for reasons why you might be wrong, or continue explaining your position more clearly and trying to understand theirs.
You do not nitpick.
It wastes everybody’s time.
New? Do you really think that’s a new idea? Popper talked about it long ago.
If Popper talked about it then why don’t you do it!
If Popper talked about it then why don’t you do it!
The more you’re having a hard time communicating and cooperating with people, the less you should skip steps. So, I will consider the best version in my mind to see if it’s a challenge to my views. But as far as trying to make progress in the discussions, I’ll try to do something simpler and easier (or, sometimes, challenging to filter people).
You do not take into account time constraints.
Rushing doesn’t solve problems better. If people haven’t got time to learn, so be it. But there are no magic shortcuts.
Your approach may work, but in terms of progress made per second, it is innefficient.
I disagree. So many conversations have basically 0 progress. Some is better than none. You have to do what actually works.
Is it worth it for the tiny gains?
Yes. That’s exactly where human progress comes from.
I think a better approach is to go straight for the central claim.
But there’s so much disagreement in background assumptions to sort out, that this often fails. If you want to do this, publish it. Get 100,000 readers. Then some will like it. In a one on one conversation with someone rather different than you, the odds are not good that you will be able to agree about the central claim while still disagreeing about a ton of other relevant stuff.
Sometimes you can do stuff like find some shared premises and then make arguments that rely only on those which reach conclusions about the central claim. But that kind of thing is super hard when you think so differently that there’s miscommunication every other sentence.
People so badly underestimate the difficulty of communicating in general, and how much it usually relies on shared background/cultural assumptions, and how much misunderstanding there really is when talking to people rather different than yourself, and how much effort it takes to get past those misunderstandings.
BTW I would like to point out that this is one of the ways that Popperians are much more into rigor and detail than Bayesians, apparently. I got a bunch of complaints about how Popperians aren’t rigorous enough. But we think these kinds of details really matter a lot! We aren’t unrigorous, we just have different conceptions of what kind of rigor is important.
If you and they continue to disagree, consider the possibility that you are biased, and question why their point might seem wrong to you, what could be causing you not to see their point, do you have a line of retreat?
Yes I do that. And like 3 levels more advanced, too. Shrug.
If that doesn’t work, you start tabooing words, an amazingly high proportion of the time the disagreement just vanishes.
You’re used to talking with people who share a lot of core ideas with you. But I’m not such a person. Tabooing words will not make our disagreements vanish because I have a fundamentally different worldview than you, in various respects. It’s not a merely verbal disagreement.
5) Then they have to pay extra attention while they type, for fear they will make a similar mistake and you will waste their time again
Fear is not the right attitude. And nor is “extra attention”. One has to integrate his knowledge into his mind so that he can use it without it taking up extra attention. Usually new skills take extra attention at first but as you get better with them the attention burden goes way down.
They are the target audience I care about, usually. You’re so used to dealing with bad people that they occupy attention in your mind. I usually don’t want them to get attention in mind. There’s better people in the world, and better things to think about.
You say you aren’t making “any assumptions” but then you give some. As an example, I’m not here to do (1) -- if it happens, cool, but it’s not my goal. That was, apparently, your assumption.
I didn’t say I was making any of those assumptions. In fact, I specifically said I wasn’t assuming any of them.
Do you feel as if anyone who reads Less Wrong is getting any benefit out of your post?
I don’t think I’m making any assumptions about this. What are your goals? I can think of several off the top of my head:
1. Convince others of your ideas.
2. Get feedback on your ideas to help you further refine them.
3. Perform a social experiment on LW (use combative tone, see results)
Goal #3 and related “trollish” goals are the only ones I can think of off the top of my head that benefit from a combative tone.
I mean...why write at all if you don’t want most people to look at the actual content of your writing?
These aren’t attacks in question form. These are honest questions.
I personally enjoyed confirming that this thread sent curi below the threshold at which anyone who is not a prominent SIAI donor can make posts. Does that count?
See this comment:
Haven’t you noticed some people said positive things in comments?
You say you aren’t making “any assumptions” but then you give some. As an example, I’m not here to do (1) -- if it happens, cool, but it’s not my goal. That was, apparently, your assumption.
The halfway sane people can read the content. I haven’t actually said anything I shouldn’t have. I haven’t actually said anything combative, except in a few particular instances where it was intentional (none in this discussion topic). I’ve only said things that I’m vaguely aware offend some silly cultural biases. The reason I said “biased idiots” is not to fight with anyone but because it clearly and accurately expresses the idea I had in mind (and also because the concept of bias is a topic of interest here). There is a serious disrespect for people as rational beings prevalent in the culture here.
Do people really get offended because of language? Maybe sometimes. I think most of it is not because of that. It’s the substance. I think the attitude behind the conjunction fallacy, and the studies, and various other things popular here, is anti-human. It’s treating humans like dirt, like idiots, like animals. That’s false and grossly immoral. I wonder how many people are now more offended due to the clarification, and how many less.
That’s what I thought. And if I didn’t, saying it would change nothing because asserting it is not an argument.
Unless, perhaps, it hadn’t even occurred to me at all. I think there’s a clash of world views and I’m not interested in changing mine for the purpose of alleviating disagreement. e.g. my writing in general is focussed on better people than the ones to whom it wouldn’t even occur that the questions weren’t attacks. They are the target audience I care about, usually. You’re so used to dealing with bad people that they occupy attention in your mind. I usually don’t want them to get attention in my mind. There’s better people in the world, and better things to think about.
People aren’t completely rational beings. Pretending they are is showing more disrespect than acknowledging that we have flaws.
Not like dirt. Not like idiots. As though we sometimes act as idiots, yes. Because we sometimes do. You seem to be confusing “always” and “sometimes”.
We are, in fact, animals. We’re a type of ape that has a very big brain, as primates go. We have many differences from the rest of the animals, but the similarities with other animals should be clear enough that it would be a severe mistake to not call us animals.
Ah, now this is a very honest and revealing statement. These are two separate issues, of course. Statements can be true or false, and actions can be grossly immoral, and not the reverse. Yet they’re linked in your mind. Why is that? Did you decide these statements(*) were false and thus that holding them is grossly immoral (or, to be charitable, likely to lead to grossly immoral actions), or were you offended by the statements and thus decided they must be false?
Not offended. Just saddened.
(*) Rather, your misunderstanding of them. “People can think P(X&Y) > P(X)” is not the same as “People always think P(X&Y) > P(X) for all X and Y”. Yes, of course special X and Y have to be selected to demonstrate this. There is a wide range between “humans aren’t perfect” and “humans are idiots”. The point of studies like this is not to assert humans are idiots, or bad at reasoning, or worthless. The point is to find out where and when normal human reasoning breaks down. Not doing these studies doesn’t change the fact that humans aren’t perfectly rational, it merely hides the exact contours of when our reasoning breaks down.
They are not separate issues. The reason for the false statements is the moral agenda.
The possible (asserted, not shown) existence of an agenda might indeed have something to do with what studies were done and how the interpreters presented them. The morality or immorality of this agenda is what is a separate issue.
This seems terribly backwards to me. In order to become more rational, to think better, we must examine our built-in biases. We don’t do and use such studies simply to point out the results and laugh at “idiots”; we must first understand the problems we face, the places where our brains don’t work well, so that we can fix them and improve ourselves. Utilizing the lens that sees its flaws is inherently pro-human. (Note that “pro-human” should be taken to mean something reasonable like “wanting humans to be the best that they can be,” which does involve admitting that humans aren’t at that point right now. That is only “anti-human” in the sense that Americans who want to improve their country are “anti-America.”)
I wasn’t talking about examining biases that exist. I was talking about making up biases, and finding ways to claim the authority of science for them.
These studies, and others, are wrong (as I explained in my post). And not randomly. They don’t reach random conclusions or make random errors. All the conclusions fit a particular agenda which has a low opinion of humans. (Low by what standard? The average man-on-the-street American Christian, for example. These studies have a significantly lower opinion of humans than our culture in general does.)
Can you understand that I didn’t say anything about laughing? And that replies like this are not productive, rational discussion?
People who demean all humans are demeaning themselves too. Who are they to laugh at?
Do you understand that I didn’t say that the “lens that sees its flaws” was anti-human?
Can you understand that the notion that our brains don’t work well, in some places, is itself a substantive assertion (for which you provide no argument, and for which these bad studies attempt to provide authority)?
Can you see that this claim is in the direction of thinking humans are a less awesome thing than they would otherwise be?
I’ll stop here for now. If you can handle these basic introductory issues, I’ll move on to some further explanation of what I was talking about. If you, for example, try to interpret these statements as my substantive arguments on the topic, and complain that they aren’t very good arguments for my initial claim, then I will stop speaking with you.
Please, please, please stop speaking with us. Uncle! Uncle!
Your post only served to highlight your own misunderstanding of the topic. From this, you’ve proceeded to imply an “agenda” behind various research and conclusions that you deem faulty. To be blunt: you sound like a conspiracy theorist at this point.
http://wiki.lesswrong.com/wiki/Bias.
Also, at this point it sounds like you’re throwing out all studies of cognitive bias based on your problems with one specific area.
Having a low opinion of default human mental capabilities does not necessarily extend to having a low opinion of humans in general. Humans are not static. Having an undeservedly high opinion of default human mental capabilities is a barrier to self-improvement.
Can you understand this Paul Graham essay?
And see where on it your above comment falls?
I’m familiar with it. Why don’t you say which issue you had in mind.
Do you understand that my goal wasn’t to be as convincing as possible, and to make it as easy as possible for the person I was talking to?
You constantly attack trivial side-points, and peculiarities of wording, putting you somewhere in the DH3-5 region, not bad but below what we are aiming for on this site, which is DH6 or preferably Black Belt Bayesian’s new DH7 (deliberately improving your opponents’ arguments for them, refuting not only the argument but the strongest thing which can be constructed from its corpse).
This is a good example. Yes, he used the word laugh and you didn’t, but that is not important to the point he was making, nor does it significantly change the meaning or the strength of the argument to remove it. If you wish to reach the higher levels of the DH, you should try to ignore things like that and refute the main point.
This is another example. You never used the phrase ‘lens that sees its flaws’, but since the heuristics and biases program is absolutely central to the lens that sees its flaws, and you accused that of being anti-human, his central point still stands.
Side-points are not trivial, and they are not unimportant, nor irrelevant (some can sometimes be irrelevant. These aren’t, in my view.)
Every mistake matters. Learning is hard. Understanding people very different than you is hard. Agreeing about which points are trivial, with people who see the world differently, is hard too.
To deal with the huge difficulty of things like learning, changing minds, making progress, it’s important to do things like respect every mistake and try to fix them all, not dismiss some as too small and gloss them over. Starting small is the best approach; only gradual progress works; trying to address the big issues straight away does not work.
New? Do you really think that’s a new idea? Popper talked about it long ago.
But I regarded it as important in several ways.
But 1) I think it does 2) I think that by rewriting it without the mistake he might learn something; he might find out it does matter; we might have a little more common ground. I don’t think skipping steps like this is wise.
You do not take into account time constraints. I have not been keeping count, but discussions with you alone have probably eaten up about 2 hours of my time, quite a lot considering the number of other tasks I am under pressure to perform.
Your approach may work, but in terms of progress made per second, it is inefficient. Consider how long it takes for every mistake:
1) First you have to notice the mistake
2) Then you have to take the time to write about it
3) Then the other person has to take the time to read it
4) Then they have to take the time to explain it to you / own up to it
5) Then they have to pay extra attention while they type, for fear they will make a similar mistake and you will waste their time again
6) Then, everybody else reading the comments will have to read through that discussion, and we really don’t care about such a tiny mistake, really, not at all!
Is it worth it for the tiny gains?
I think a better approach is to go straight for the central claim. If you and they continue to disagree, consider the possibility that you are biased, and question why their point might seem wrong to you, what could be causing you not to see their point, do you have a line of retreat? (You trust that he will be doing the same; don’t make accusations until you are sure of yourself.) If that doesn’t work, you start tabooing words; an amazingly high proportion of the time the disagreement just vanishes. If that doesn’t work, you either continue to look for reasons why you might be wrong, or continue explaining your position more clearly and trying to understand theirs.
You do not nitpick.
It wastes everybody’s time.
If Popper talked about it then why don’t you do it!
The more you’re having a hard time communicating and cooperating with people, the less you should skip steps. So, I will consider the best version in my mind to see if it’s a challenge to my views. But as far as trying to make progress in the discussions, I’ll try to do something simpler and easier (or, sometimes, challenging to filter people).
Rushing doesn’t solve problems better. If people haven’t got time to learn, so be it. But there are no magic shortcuts.
I disagree. So many conversations have basically 0 progress. Some is better than none. You have to do what actually works.
Yes. That’s exactly where human progress comes from.
But there’s so much disagreement in background assumptions to sort out, that this often fails. If you want to do this, publish it. Get 100,000 readers. Then some will like it. In a one on one conversation with someone rather different than you, the odds are not good that you will be able to agree about the central claim while still disagreeing about a ton of other relevant stuff.
Sometimes you can do stuff like find some shared premises and then make arguments that rely only on those which reach conclusions about the central claim. But that kind of thing is super hard when you think so differently that there’s miscommunication every other sentence.
People so badly underestimate the difficulty of communicating in general, and how much it usually relies on shared background/cultural assumptions, and how much misunderstanding there really is when talking to people rather different than yourself, and how much effort it takes to get past those misunderstandings.
BTW I would like to point out that this is one of the ways that Popperians are much more into rigor and detail than Bayesians, apparently. I got a bunch of complaints about how Popperians aren’t rigorous enough. But we think these kinds of details really matter a lot! We aren’t unrigorous, we just have different conceptions of what kind of rigor is important.
Yes I do that. And like 3 levels more advanced, too. Shrug.
You’re used to talking with people who share a lot of core ideas with you. But I’m not such a person. Tabooing words will not make our disagreements vanish because I have a fundamentally different worldview than you, in various respects. It’s not a merely verbal disagreement.
Fear is not the right attitude. And nor is “extra attention”. One has to integrate his knowledge into his mind so that he can use it without it taking up extra attention. Usually new skills take extra attention at first but as you get better with them the attention burden goes way down.
Are you interested in changing your worldview if it happens to be incorrect?
That’s a good way to get caught in an affective death spiral.
Yes of course.
I didn’t say I was making any of those assumptions. In fact, I specifically said I wasn’t assuming any of them.
You didn’t say that. Apparently you meant you were stating possible, not actual goals. But you didn’t say that either. Your bad. Agreed?
No, but I don’t see any point in continuing this.